Results 1 - 20 of 3,610
1.
Pattern Recognit ; 151, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38559674

ABSTRACT

Machine learning in medical imaging often faces a fundamental dilemma, namely, the small sample size problem. Many recent studies suggest using multi-domain data pooled from different acquisition sites/centers to improve statistical power. However, medical images from different sites cannot be easily shared to build large datasets for model training due to privacy protection reasons. As a promising solution, federated learning, which enables collaborative training of machine learning models based on data from different sites without cross-site data sharing, has attracted considerable attention recently. In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis. We have systematically gathered research papers on federated learning and its applications in medical image analysis published between 2017 and 2023. Our search and compilation were conducted using databases from IEEE Xplore, ACM Digital Library, Science Direct, Springer Link, Web of Science, Google Scholar, and PubMed. In this survey, we first introduce the background of federated learning for dealing with privacy protection and collaborative learning issues. We then present a comprehensive review of recent advances in federated learning methods for medical image analysis. Specifically, existing methods are categorized based on three critical aspects of a federated learning system, including client end, server end, and communication techniques. In each category, we summarize the existing federated learning methods according to specific research problems in medical image analysis and also provide insights into the motivations of different approaches. In addition, we provide a review of existing benchmark medical imaging datasets and software platforms for current federated learning research. We also conduct an experimental study to empirically evaluate typical federated learning methods for medical image analysis. This survey can help to better understand the current research status, challenges, and potential research opportunities in this promising research field.

2.
Heliyon ; 10(5): e27176, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38562497

ABSTRACT

Federated learning enables the collaborative training of machine learning models across multiple organizations, eliminating the need to share sensitive data. In practice, however, the data distributions among these organizations are often non-independent and identically distributed (non-IID), which poses significant challenges for traditional federated learning. To tackle this challenge, we present a hierarchical federated learning framework based on blockchain technology, designed to enhance training on non-IID data, protect data privacy and security, and improve federated learning performance. The framework builds a global shared pool by constructing a blockchain system to reduce the non-IID degree of local data and improve model accuracy. In addition, we use smart contracts to distribute and collect models and design a main blockchain to store local models for federated aggregation, achieving decentralized federated learning. We train an MLP model on the MNIST dataset and a CNN model on the Fashion-MNIST and CIFAR-10 datasets to verify the framework's feasibility and effectiveness. The experimental results show that the proposed strategy significantly improves the accuracy of decentralized federated learning on three tasks with non-IID data.
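As a rough illustration of the server-side aggregation step that such a framework decentralizes, the sketch below implements a plain federated-averaging update; the client count, parameter shapes, and sample-size weighting are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of federated averaging (FedAvg-style aggregation).
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weight each client's parameters by its local sample count and average them."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Example: three clients with differently sized (non-IID) local datasets,
# each holding a two-layer parameter list.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
global_model = federated_average(clients, client_sizes=[100, 250, 50])
print([p.shape for p in global_model])  # [(4, 3), (3,)]
```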

3.
Heliyon ; 10(7): e29024, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38596015

ABSTRACT

This study investigated the seat layout of automobile interiors and its impact on the fluidity and privacy of interior space using spatial perception and space syntax research methods. The interior of an automobile is a typical "miniature" passenger space. First, to explore how interior space fluidity and privacy are perceived across different seat configurations, we conducted a perception experiment on the interior space of seven automobile models with various seat layouts. Depth, connectivity, global integration, and normalized integration values were then obtained by performing topological space syntax calculations on the experimental automobile models. We conducted a correlation analysis combining the results of the perception experiment and the space syntax analysis. The space syntax calculations are consistent with the experimental results on how automobile interior layout is perceived in terms of fluidity and privacy. Different seat layouts affect people's perception of the fluidity and privacy of the automobile interior space, and space syntax can provide an effective design analysis tool for both qualities.

4.
Data Brief ; 54: 110351, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38586131

ABSTRACT

This dataset presents survey results on consumer concerns about business information privacy practices, online trust, and purchase intention in the online marketplace in Vietnam. The raw data were collected via an online questionnaire of 467 randomly recruited respondents aged 18 and over. The survey included questions on demographic attributes as well as ratings and rankings for statements related to privacy concerns (collection, unauthorized secondary use (internal), improper access, and error), consumer online trust, and purchase intention when shopping online. The de-identified dataset is available in CSV format, including the question/statement text, collection method details, and coded response values. This novel dataset supports further investigation of the impact of information privacy concerns on consumer behavior in an emerging Southeast Asian e-commerce market. As one of the first collections of empirical data focused distinctly on perspectives within Vietnam, it has high reuse potential for research on information privacy attitudes, responses, and needs within the country and in comparison with regional and global trends.

6.
Article in English | MEDLINE | ID: mdl-38573195

ABSTRACT

OBJECTIVE: To develop and validate a natural language processing (NLP) pipeline that detects 18 conditions in French clinical notes, including 16 comorbidities of the Charlson index, while exploring a collaborative and privacy-enhancing workflow. MATERIALS AND METHODS: The detection pipeline relied on rule-based and machine learning algorithms for named entity recognition and entity qualification, respectively. We used a large language model pre-trained on millions of clinical notes along with annotated clinical notes from 3 cohort studies related to oncology, cardiology, and rheumatology. The overall workflow was conceived to foster collaboration between studies while respecting the privacy constraints of the data warehouse. We estimated the added value of the advanced technologies and of the collaborative setting. RESULTS: The pipeline reached macro-averaged F1-score, positive predictive value, sensitivity, and specificity of 95.7 (95% CI 94.5-96.3), 95.4 (95% CI 94.0-96.3), 96.0 (95% CI 94.0-96.7), and 99.2 (95% CI 99.0-99.4), respectively. F1-scores were superior to those observed using alternative technologies or non-collaborative settings. The models were shared through a secured registry. CONCLUSIONS: We demonstrated that a community of investigators working on a common clinical data warehouse can efficiently and securely collaborate to develop, validate, and use sensitive artificial intelligence models. In particular, we provide an efficient and robust NLP pipeline that detects conditions mentioned in clinical notes.
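For readers unfamiliar with the reported metrics, the snippet below shows how macro-averaged F1, positive predictive value (precision), and sensitivity (recall) are typically computed with scikit-learn; the condition labels and toy predictions are assumptions for illustration, not the study's data.

```python
# Macro-averaged scores: compute each metric per condition, then take the unweighted mean.
from sklearn.metrics import f1_score, precision_score, recall_score

conditions = ["diabetes", "heart_failure", "cancer"]          # assumed label set
y_true = ["diabetes", "cancer", "cancer", "heart_failure"]    # gold annotations (toy)
y_pred = ["diabetes", "cancer", "heart_failure", "heart_failure"]

macro_f1 = f1_score(y_true, y_pred, labels=conditions, average="macro", zero_division=0)
macro_ppv = precision_score(y_true, y_pred, labels=conditions, average="macro", zero_division=0)
macro_sens = recall_score(y_true, y_pred, labels=conditions, average="macro", zero_division=0)
print(f"F1={macro_f1:.3f}  PPV={macro_ppv:.3f}  Sensitivity={macro_sens:.3f}")
```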

7.
Sci China Life Sci ; 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38573362

ABSTRACT

The human face is a valuable biomarker of aging, but the collection and use of facial images raise significant privacy concerns. Here we present an approach for facial data masking that preserves age-related features using coordinate-wise monotonic transformations. We first develop a deep learning model that estimates age directly from non-registered face point clouds with high accuracy and generalizability. We show that the model learns a nearly indistinguishable mapping from faces treated with coordinate-wise monotonic transformations, indicating that the relative positioning of facial information is a low-level biomarker of facial aging. Through visual perception tests and computational 3D face verification experiments, we demonstrate that transformed faces are significantly more difficult for humans to perceive, but not for machines, except when only the face shape information is accessible. Our study leads to a facial data protection guideline that has the potential to broaden public access to face datasets with minimized privacy risks.
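The masking idea is easy to sketch: each coordinate axis of the point cloud is warped by a strictly increasing function, which preserves the relative ordering of points while distorting absolute geometry. The specific functions below are illustrative assumptions, not the transformations used in the study.

```python
# Coordinate-wise monotonic masking of a 3D point cloud (illustrative sketch).
import numpy as np

def monotonic_mask(points, fns=(np.tanh, np.cbrt, lambda z: z + 0.2 * z**3)):
    """Apply one strictly increasing function per coordinate axis."""
    return np.stack([fn(points[:, i]) for i, fn in enumerate(fns)], axis=1)

cloud = np.random.default_rng(1).normal(size=(1000, 3))  # stand-in for a face scan
masked = monotonic_mask(cloud)
# Ordering along each axis is preserved, so relative positional cues survive the masking.
assert (np.argsort(cloud[:, 0]) == np.argsort(masked[:, 0])).all()
```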

8.
Front Robot AI ; 11: 1331347, 2024.
Article in English | MEDLINE | ID: mdl-38577484

ABSTRACT

The targeted use of social robots in the family demands a better understanding of multiple stakeholders' privacy concerns, including those of parents and children. Through a co-learning workshop that introduced families to the functions and hypothetical use of social robots in the home, we present preliminary evidence from 6 families showing that parents and children have different comfort levels with robots collecting and sharing information across different use contexts. Conversations and booklet answers reveal that parents adopted their child's decision in scenarios where they expected children to have more agency, such as completing homework or cleaning up toys, and when children proposed reasoning for their decisions that their parents found acceptable. Families expressed relief when they shared the same reasoning in coming to conclusive decisions, signifying agreement on boundary management between the robot and the family. In cases where parents and children did not agree, they rejected a binary, either-or decision and opted for a third type of response reflecting skepticism, uncertainty, and/or compromise. Our work highlights the benefits of involving parents and children in child- and family-centered research, including parents' ability to provide cognitive scaffolding and to personalize hypothetical scenarios for their children.

9.
Sensors (Basel) ; 24(7)2024 Mar 31.
Article in English | MEDLINE | ID: mdl-38610451

ABSTRACT

A smart city is an area where the Internet of Things is used effectively through sensors. The data used by a smart city can be collected through cameras, sensors, and similar devices. Intelligent video surveillance (IVS) systems integrate multiple networked cameras for automatic surveillance purposes. Such systems can analyze and monitor video data and perform automatic functions required by users. This study performed main path analysis (MPA) to explore the development trends of IVS research. First, relevant articles were retrieved from the Web of Science database. Next, MPA was performed to analyze development trends in relevant research, and g-index and h-index values were analyzed to identify influential journals. Cluster analysis was then performed to group similar articles, and Wordle was used to display the key words of each group in word clouds. These key words served as the basis for naming their corresponding groups. Data mining and statistical analysis yielded six major IVS research topics, namely video cameras, background modeling, closed-circuit television, multiple cameras, person re-identification, and privacy, security, and protection. These topics can drive the future innovation and development of IVS technology and contribute to smart transportation, smart cities, and other applications. Based on the study results, predictions were made regarding developments in IVS research to provide recommendations for future work.
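The two bibliometric indicators used above are simple to compute; the sketch below shows the standard definitions (h: the largest h such that h papers have at least h citations each; g: the largest g such that the top g papers together have at least g² citations). The citation counts are made-up example values.

```python
# h-index and g-index from a list of per-paper citation counts (illustrative values).
def h_index(citations):
    c = sorted(citations, reverse=True)
    return sum(1 for i, ci in enumerate(c, start=1) if ci >= i)

def g_index(citations):
    c = sorted(citations, reverse=True)
    g, cumulative = 0, 0
    for i, ci in enumerate(c, start=1):
        cumulative += ci
        if cumulative >= i * i:
            g = i
    return g

papers = [42, 17, 9, 6, 5, 3, 1, 0]   # hypothetical citation counts for one journal
print(h_index(papers), g_index(papers))  # 5 8
```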

10.
Front Artif Intell ; 7: 1377011, 2024.
Article in English | MEDLINE | ID: mdl-38601110

ABSTRACT

As Artificial Intelligence (AI) becomes more prevalent, protecting personal privacy is a critical ethical issue that must be addressed. This article explores the need for ethical AI systems that safeguard individual privacy while complying with ethical standards. By taking a multidisciplinary approach, the research examines innovative algorithmic techniques such as differential privacy, homomorphic encryption, federated learning, international regulatory frameworks, and ethical guidelines. The study concludes that these algorithms effectively enhance privacy protection while balancing the utility of AI with the need to protect personal data. The article emphasises the importance of a comprehensive approach that combines technological innovation with ethical and regulatory strategies to harness the power of AI in a way that respects and protects individual privacy.
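Of the algorithmic techniques the article surveys, differential privacy is the easiest to show concretely: the Laplace mechanism adds calibrated noise to a query so that any individual's presence in the data is hidden. The epsilon value, query, and data below are illustrative assumptions.

```python
# Laplace mechanism for epsilon-differential privacy (illustrative sketch).
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Return a noisy answer satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

ages = np.array([34, 29, 41, 52, 38])
# A counting query has sensitivity 1: adding or removing one person changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=len(ages), sensitivity=1.0, epsilon=0.5)
print(round(noisy_count, 2))
```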

11.
Trends Genet ; 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38637270

ABSTRACT

Artificial intelligence (AI) in omics analysis raises privacy threats to patients. Here, we briefly discuss risk factors to patient privacy in data sharing, model training, and release, as well as methods to safeguard and evaluate patient privacy in AI-driven omics methods.

12.
Sensors (Basel) ; 24(7)2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38610301

ABSTRACT

Existing secure data aggregation protocols are weak at eliminating data redundancy and protecting wireless sensor networks (WSNs), and only some existing approaches have addressed even this single issue when aggregating data. There is therefore a need for a multi-featured protocol that handles the multiple problems of data aggregation, such as energy efficiency, authentication, authorization, and maintaining the security of the network. Given this demand for a multi-featured data aggregation protocol, we propose the secure data aggregation using authentication and authorization (SDAAA) protocol, which detects malicious attacks, particularly cyberattacks such as Sybil and sinkhole attacks, and improves network performance. These attacks are difficult to address with existing cryptographic protocols. The proposed SDAAA protocol comprises a node authorization algorithm that permits only legitimate nodes to communicate within the network, and its methods help improve quality-of-service (QoS) parameters. Furthermore, we introduce a mathematical model to improve accuracy, energy efficiency, data freshness, authorization, and authentication. Finally, our protocol is tested in an intelligent healthcare WSN patient-monitoring application scenario and verified using the OMNET++ simulator. Based on the results, we confirm that our proposed SDAAA protocol attains a throughput of 444 kb/s, representing 98% of the data/network channel capacity; an energy consumption of 2.6 joules, representing 99% network energy efficiency; a network effectiveness of 2.45, representing 99.5% overall network performance; and a runtime of 0.08 s, representing 98.5% efficiency of the proposed approach. By contrast, contending protocols such as SD, EEHA, HAS, IIF, and RHC achieve throughputs between 415 and 443 kb/s, representing 85-90% of the data rate/channel capacity of the network; energy consumption in the range of 3.0-3.6 joules, representing 88-95% network energy efficiency; a network effectiveness of 2.98, representing 72-89% overall network performance; and runtimes around 0.20 s, representing 72-89% efficiency. Therefore, our proposed SDAAA protocol outperforms known approaches such as SD, EEHA, HAS, IIF, and RHC designed for secure data aggregation in a similar environment.
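The sketch below illustrates the general idea behind the authentication/authorization step in such protocols: each sensor node tags its reading with a keyed hash, and the aggregator sums only readings whose tags verify. The keys, node IDs, and readings are made up for illustration; this is not the SDAAA protocol itself.

```python
# Authenticated data aggregation with per-node HMAC tags (illustrative sketch).
import hmac, hashlib

NODE_KEYS = {"node-1": b"k1-secret", "node-2": b"k2-secret"}   # assumed pre-shared keys

def sign(node_id, reading):
    msg = f"{node_id}:{reading}".encode()
    return hmac.new(NODE_KEYS[node_id], msg, hashlib.sha256).hexdigest()

def aggregate(packets):
    """Sum only readings from nodes whose HMAC tag verifies (authorization check)."""
    total = 0.0
    for node_id, reading, tag in packets:
        if node_id in NODE_KEYS and hmac.compare_digest(sign(node_id, reading), tag):
            total += reading
    return total

packets = [("node-1", 36.6, sign("node-1", 36.6)),
           ("node-2", 37.1, sign("node-2", 37.1)),
           ("node-2", 99.9, "forged-tag")]          # rejected: fails verification
print(aggregate(packets))  # 73.7
```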

13.
Sensors (Basel) ; 24(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610408

ABSTRACT

Data from the Internet of Things (IoT) enables the design of new business models and services that improve user experience and satisfaction. These data serve as important information sources for many domains, including disaster management, biosurveillance, smart cities, and smart health, among others. However, this scenario involves the collection of personal data, raising new challenges related to data privacy protection. Therefore, we aim to provide state-of-the-art information regarding privacy issues in the context of IoT, with a particular focus on findings that utilize the Personal Data Store (PDS) as a viable solution for these concerns. To achieve this, we conduct a systematic mapping review to identify, evaluate, and interpret the relevant literature on privacy issues and PDS-based solutions in the IoT context. Our analysis is guided by three well-defined research questions, and we systematically selected 49 studies published until 2023 from an initial pool of 176 papers. We analyze and discuss the most common privacy issues highlighted by the authors and position the role of PDS technologies as a solution to privacy issues in the IoT context. As a result, our findings reveal that only a small number of works (approximately 20%) were dedicated to presenting solutions for privacy issues. Most works (almost 82%) were published between 2018 and 2023, demonstrating an increased interest in the theme in recent years. Additionally, only two works used PDS-based solutions to deal with privacy issues in the IoT context.

14.
Camb Q Healthc Ethics ; : 1-13, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38606432

ABSTRACT

Advances in brain-brain interface technologies raise the possibility that two or more individuals could directly link their minds, sharing thoughts, emotions, and sensory experiences. This paper explores conceptual and ethical issues posed by such mind-merging technologies in the context of clinical neuroethics. Using hypothetical examples along a spectrum from loosely connected pairs to fully merged minds, the authors sketch out a range of factors relevant to identifying the degree of a merger. They then consider potential new harms such as loss of identity, psychological domination, loss of mental privacy, and challenges for notions of autonomy and patient benefit when applied to merged minds. While radical technologies may seem to necessitate new ethical paradigms, the authors suggest the individual focus underpinning clinical ethics can largely accommodate varying degrees of mind merger so long as individual patient interests remain identifiable. However, advance decision-making and directives may have limitations in addressing the dilemmas posed. Overall, mind-merging possibilities amplify existing challenges around loss of identity, relating to others, autonomy, privacy, and the delineation of patient interests. This paper lays the groundwork for developing resources to address the novel issues raised, while suggesting the technologies reveal continuity with current tensions in healthcare ethics.

15.
Front Big Data ; 7: 1384460, 2024.
Article in English | MEDLINE | ID: mdl-38628874
16.
JMIR Med Inform ; 12: e53075, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38632712

ABSTRACT

Background: Pseudonymization has become a best practice for securely managing the identities of patients and study participants in medical research projects and data sharing initiatives. This method offers the advantage that directly identifying data are not required to support various research processes, while advanced processing activities, such as data linkage, remain possible. Often, pseudonymization and related functionalities are bundled in specific technical and organizational units known as trusted third parties (TTPs). However, pseudonymization can significantly increase the complexity of data management and research workflows, necessitating adequate tool support. Common tasks of TTPs include supporting the secure registration and pseudonymization of patient and sample identities as well as managing consent. Objective: Despite the challenges involved, little has been published about successful architectures and functional tools for implementing TTPs in large university hospitals. The aim of this paper is to fill this research gap by describing the software architecture and tool set developed and deployed as part of a TTP established at Charité - Universitätsmedizin Berlin. Methods: The infrastructure for the TTP was designed to provide a modular structure while keeping maintenance requirements low. Basic functionalities were realized with the free MOSAIC tools. However, supporting common study processes requires implementing workflows that span different basic services, such as patient registration, followed by pseudonym generation and concluded by consent collection. To achieve this, an integration layer was developed to provide a unified Representational State Transfer (REST) application programming interface (API) as a basis for more complex workflows. Based on this API, a unified graphical user interface was also implemented, providing an integrated view of the information objects and workflows supported by the TTP. The API was implemented using Java and Spring Boot, while the graphical user interface was implemented in PHP and Laravel. Both services use a shared Keycloak instance as a unified management system for roles and rights. Results: By the end of 2022, the TTP had supported more than 10 research projects since its launch in December 2019. Within these projects, more than 3000 identities were stored, more than 30,000 pseudonyms were generated, and more than 1500 consent forms were submitted. In total, more than 150 people regularly work with the software platform. By implementing the integration layer and the unified user interface, together with comprehensive roles and rights management, the effort for operating the TTP could be significantly reduced, as personnel of the supported research projects can use many functionalities independently. Conclusions: With the architecture and components described, we created a user-friendly and compliant environment for supporting research projects. We believe that the insights into the design and implementation of our TTP can help other institutions to set up corresponding structures efficiently and effectively.
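The core operation a TTP performs, pseudonym generation, can be sketched as a keyed hash that maps a patient identity to a stable, non-reversible identifier. The secret, identity fields, and pseudonym format below are assumptions for illustration; the Charité TTP described above builds on the MOSAIC tools rather than this code.

```python
# Keyed-hash pseudonym generation (illustrative sketch, not the MOSAIC implementation).
import hmac, hashlib

TTP_SECRET = b"replace-with-a-managed-secret"      # held only inside the TTP

def pseudonymize(first_name, last_name, birth_date, project="PROJ-A"):
    identity = f"{first_name}|{last_name}|{birth_date}|{project}".lower().encode()
    digest = hmac.new(TTP_SECRET, identity, hashlib.sha256).hexdigest()
    return f"{project}-{digest[:12].upper()}"       # short, project-scoped pseudonym

print(pseudonymize("Ada", "Lovelace", "1815-12-10"))
# The same identity always maps to the same pseudonym, enabling record linkage
# without exposing the directly identifying fields.
```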

17.
Front Genet ; 15: 1272924, 2024.
Article in English | MEDLINE | ID: mdl-38633409

ABSTRACT

Biomedical research using human biological material and data is essential for improving human health, but it requires the active participation of many human volunteers in addition to the distribution of data. As a result, it raises numerous vexing questions related to trust, privacy, and consent. Trust is essential in biomedical research, as it relates directly to participants' willingness to continue participating in research. Privacy and the protection of personal information also influence trust. Informed consent has proven insufficient, as it cannot overcome the informational deficit between primary and unknown future uses of material and data and is therefore neither fully informed nor valid. Broad consent is also problematic, as it takes full control of samples and data flows away from the research participant and inherently requires that a participant trust that the researcher will use their material or data in a manner they would find acceptable. This paper attempts to offer some insight into how these related issues can be overcome. It introduces dynamic consent as a consent model in research involving human biological material and its associated data. Dynamic consent is explained, as well as its claims of superiority in instances where future research is possible. It is also shown how dynamic consent contributes to better control of samples and data by the research participant, and how trust may be improved by using this consent model. Dynamic consent's co-existence with and support of the South African Protection of Personal Information Act of 2013 is also assessed. The limitations of dynamic consent are also discussed.

18.
JMIR Form Res ; 8: e53241, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38648097

ABSTRACT

BACKGROUND: Electronic health records are a valuable source of patient information that must be properly deidentified before being shared with researchers. This process requires expertise and time. In addition, synthetic data have considerably reduced the restrictions on the use and sharing of real data, allowing researchers to access it more rapidly with far fewer privacy constraints. Therefore, there has been a growing interest in establishing a method to generate synthetic data that protects patients' privacy while properly reflecting the data. OBJECTIVE: This study aims to develop and validate a model that generates valuable synthetic longitudinal health data while protecting the privacy of the patients whose data are collected. METHODS: We investigated the best model for generating synthetic health data, with a focus on longitudinal observations. We developed a generative model that relies on the generalized canonical polyadic (GCP) tensor decomposition. This model also involves sampling from a latent factor matrix of GCP decomposition, which contains patient factors, using sequential decision trees, copula, and Hamiltonian Monte Carlo methods. We applied the proposed model to samples from the MIMIC-III (version 1.4) data set. Numerous analyses and experiments were conducted with different data structures and scenarios. We assessed the similarity between our synthetic data and the real data by conducting utility assessments. These assessments evaluate the structure and general patterns present in the data, such as dependency structure, descriptive statistics, and marginal distributions. Regarding privacy disclosure, our model preserves privacy by preventing the direct sharing of patient information and eliminating the one-to-one link between the observed and model tensor records. This was achieved by simulating and modeling a latent factor matrix of GCP decomposition associated with patients. RESULTS: The findings show that our model is a promising method for generating synthetic longitudinal health data that is similar enough to real data. It can preserve the utility and privacy of the original data while also handling various data structures and scenarios. In certain experiments, all simulation methods used in the model produced the same high level of performance. Our model is also capable of addressing the challenge of sampling patients from electronic health records. This means that we can simulate a variety of patients in the synthetic data set, which may differ in number from the patients in the original data. CONCLUSIONS: We have presented a generative model for producing synthetic longitudinal health data. The model is formulated by applying the GCP tensor decomposition. We have provided 3 approaches for the synthesis and simulation of a latent factor matrix following the process of factorization. In brief, we have reduced the challenge of synthesizing massive longitudinal health data to synthesizing a nonlongitudinal and significantly smaller data set.
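As a rough sketch of the idea (not the authors' implementation), the snippet below treats a patients x time x features tensor as a CP-style factorization, simulates new rows of the patient factor matrix, and reconstructs synthetic records. The plain Gaussian sampling stands in for the paper's sequential-tree, copula, and Hamiltonian Monte Carlo options, and all shapes and factors are assumptions.

```python
# CP-style factor sampling and reconstruction of a synthetic longitudinal tensor (sketch).
import numpy as np

rng = np.random.default_rng(42)
n_patients, n_time, n_feat, rank = 50, 8, 6, 3

# Pretend these factors came from fitting a rank-3 decomposition to real data.
A = rng.normal(size=(n_patients, rank))   # patient factors (latent matrix)
B = rng.normal(size=(n_time, rank))       # time factors
C = rng.normal(size=(n_feat, rank))       # feature factors

# Simulate new patient factors from the empirical mean/covariance of A.
A_syn = rng.multivariate_normal(A.mean(axis=0), np.cov(A, rowvar=False), size=100)

# Reconstruct: X[i, t, f] = sum_r A_syn[i, r] * B[t, r] * C[f, r]
X_syn = np.einsum("ir,tr,fr->itf", A_syn, B, C)
print(X_syn.shape)  # (100, 8, 6): more synthetic patients than real ones
```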

19.
JMIR Med Inform ; 12: e49646, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654577

ABSTRACT

Background: The SARS-CoV-2 pandemic has demonstrated once again that rapid collaborative research is essential for the future of biomedicine. Large research networks are needed to collect, share, and reuse data and biosamples to generate collaborative evidence. However, setting up such networks is often complex and time-consuming, as common tools and policies are needed to ensure interoperability and the required flows of data and samples, especially for handling personal data and the associated data protection issues. In biomedical research, pseudonymization detaches directly identifying details from biomedical data and biosamples and connects them using secure identifiers, the so-called pseudonyms. This protects privacy by design but allows the necessary linkage and reidentification. Objective: Although pseudonymization is used in almost every biomedical study, there are currently no pseudonymization tools that can be rapidly deployed across many institutions. Moreover, using centralized services is often not possible, for example, when data are reused and consent for this type of data processing is lacking. We present the ORCHESTRA Pseudonymization Tool (OPT), developed under the umbrella of the ORCHESTRA consortium, which faced exactly these challenges when it came to rapidly establishing a large-scale research network in the context of the rapid pandemic response in Europe. Methods: To overcome challenges caused by the heterogeneity of IT infrastructures across institutions, the OPT was developed based on programmable runtime environments available at practically every institution: office suites. The software is highly configurable and provides many features, from subject and biosample registration to record linkage and the printing of machine-readable codes for labeling biosample tubes. Special care has been taken to ensure that the algorithms implemented are efficient so that the OPT can be used to pseudonymize large data sets, which we demonstrate through a comprehensive evaluation. Results: The OPT is available for Microsoft Office and LibreOffice, so it can be deployed on Windows, Linux, and MacOS. It provides multiuser support and is configurable to meet the needs of different types of research projects. Within the ORCHESTRA research network, the OPT has been successfully deployed at 13 institutions in 11 countries in Europe and beyond. As of June 2023, the software manages data about more than 30,000 subjects and 15,000 biosamples. Over 10,000 labels have been printed. The results of our experimental evaluation show that the OPT offers practical response times for all major functionalities, pseudonymizing 100,000 subjects in 10 seconds using Microsoft Excel and in 54 seconds using LibreOffice. Conclusions: Innovative solutions are needed to make the process of establishing large research networks more efficient. The OPT, which leverages the runtime environment of common office suites, can be used to rapidly deploy pseudonymization and biosample management capabilities across research networks. The tool is highly configurable and available as open-source software.
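One of the capabilities mentioned above, record linkage, is straightforward to illustrate: once two sites hold the same pseudonyms, their records can be joined without any directly identifying fields changing hands. The pseudonym values and attributes below are purely illustrative, not OPT output.

```python
# Joining records from two sites on shared pseudonyms (illustrative sketch).
site_a = {"PS-7F3A21": {"age": 64, "variant": "alpha"},
          "PS-0B9C44": {"age": 31, "variant": "delta"}}
site_b = {"PS-7F3A21": {"antibody_titer": 212},
          "PS-5EE012": {"antibody_titer": 67}}

linked = {pid: {**site_a[pid], **site_b[pid]} for pid in site_a.keys() & site_b.keys()}
print(linked)  # {'PS-7F3A21': {'age': 64, 'variant': 'alpha', 'antibody_titer': 212}}
```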

20.
Front Public Health ; 12: 1347231, 2024.
Article in English | MEDLINE | ID: mdl-38655509

ABSTRACT

Introduction: Medical tourism has grown significantly, raising critical concerns about the privacy of medical tourists. This study investigates privacy issues in medical tourism from a game-theoretic perspective, focusing on how stakeholders' strategies affect privacy protection. Methods: We employed an evolutionary game model to explore the interactions between medical institutions, medical tourists, and government departments. The model identifies stable strategies that stakeholders may adopt to protect the privacy of medical tourists. Results: Two primary stable strategies were identified, with E6(1,0,1) emerging as the optimal strategy. This strategy involves active protection measures by medical institutions, the decision by tourists to forgo accountability, and strict supervision by government departments. The evolution of the system's strategy is significantly influenced by the government's penalty intensity, subsidies, incentives, and the compensatory measures of medical institutions. Discussion: The findings suggest that medical institutions are quick to make decisions favoring privacy protection, while medical tourists tend to rely on learning and conformity. Government strategy remains consistent, with increased subsidies and penalties encouraging medical institutions to adopt proactive privacy protection strategies. We recommend policies to enhance privacy protection in medical tourism, contributing to the industry's sustainable growth.
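A minimal sketch of the replicator-dynamics update that drives such an evolutionary game toward a stable strategy like E6(1,0,1) is shown below; the payoff numbers (protection cost, government subsidy, and penalty) are illustrative assumptions, not the paper's parameterization.

```python
# Replicator dynamics for the fraction of institutions actively protecting privacy (sketch).
def replicator_step(x, payoff_protect, payoff_neglect, dt=0.01):
    """x: fraction of institutions that actively protect privacy."""
    avg = x * payoff_protect + (1 - x) * payoff_neglect
    return x + dt * x * (payoff_protect - avg)

x = 0.2                                 # few institutions protect privacy initially
cost, subsidy, penalty = 4.0, 3.0, 6.0  # assumed payoffs under strict supervision
for _ in range(2000):
    x = replicator_step(x, payoff_protect=subsidy - cost, payoff_neglect=-penalty)
print(round(x, 3))  # converges toward 1 when subsidy plus penalty outweigh the cost
```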


Subjects
Game Theory, Medical Tourism, Privacy, Humans